Autonomous cars will be indispensable as humans move further toward hands-free mobility. Although the existing literature suggests that acceptance of autonomous cars increases when they drive in a human-like manner, little research has examined the human likeness of current autonomous cars through naturalistic experience from the passenger's seat. The present study tested whether an AI driver could create a human-like ride experience for passengers, based on 69 participants' feedback in a real-road scenario. We designed a ride-experience-based version of the non-verbal Turing test for automated driving. Participants rode as passengers in autonomous cars driven by either a human or an AI driver and judged whether the driver was human or AI. The AI driver failed to pass our test because passengers detected it at above-chance rates; in contrast, when a human drove the car, passengers' judgements were around chance. We further investigated how human passengers ascribe humanness in our test. Based on Lewin's field theory, we advanced a computational model combining signal detection theory with pre-trained language models to predict passengers' humanness rating behaviour. We employed the affective transition between pre-study baseline emotions and corresponding post-stage emotions as the signal strength of our model. Results showed that passengers' ascription of humanness increased with greater affective transition. Our study suggests an important role for affective transition in passengers' ascription of humanness, which might become a future direction for autonomous driving.
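The abstract's model pairs signal detection theory with pre-trained language models. As a minimal, standard-library-only illustration of the signal-detection component alone, the sensitivity index d' separates passengers' hit rate (correctly calling the AI driver "AI") from their false-alarm rate (calling the human driver "AI"). The rates below are hypothetical, not the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse CDF of the standard normal."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical example: passengers correctly flag the AI driver 75% of the
# time and mislabel the human driver as AI 50% of the time (chance level).
sensitivity = d_prime(0.75, 0.50)
```

A d' of zero would mean passengers cannot distinguish the AI driver from a human driver at all; above-chance detection, as the abstract reports, corresponds to a positive d'.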
Model-based single image dehazing algorithms restore haze-free images with sharp edges and rich details for real-world hazy images, but at the expense of low PSNR and SSIM values for synthetic hazy images. Data-driven ones restore haze-free images with high PSNR and SSIM values for synthetic hazy images, but with low contrast and even some remaining haze for real-world hazy images. In this paper, a novel single image dehazing algorithm is introduced by combining model-based and data-driven approaches. Both the transmission map and the atmospheric light are first estimated by a model-based method and then refined by dual-scale generative adversarial network (GAN)-based approaches. The resulting algorithm forms a neural augmentation that converges very fast, while the corresponding data-driven approach might not converge. The haze-free image is restored using the estimated transmission map and atmospheric light together with the Koschmieder law. Experimental results indicate that the proposed algorithm can remove haze well from both real-world and synthetic hazy images.
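The restoration step inverts the Koschmieder law I = J·t + A·(1 − t), where I is the hazy image, J the haze-free scene radiance, t the transmission map, and A the atmospheric light. A minimal NumPy sketch of that inversion (the lower clipping threshold on t is a common heuristic and an assumption here, not a detail from the paper):

```python
import numpy as np

def restore_haze_free(I: np.ndarray, t: np.ndarray, A: np.ndarray,
                      t_min: float = 0.1) -> np.ndarray:
    """Invert the Koschmieder law I = J*t + A*(1 - t) to recover J.

    I: hazy image (H, W, 3) in [0, 1], t: transmission map (H, W),
    A: atmospheric light (3,). t is clipped from below to avoid
    amplifying noise where the haze is dense (t near 0).
    """
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)
```

In the paper's pipeline, t and A come from the model-based estimate refined by the GANs; here they are simply given as inputs.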
Artificial intelligence and neuroscience are deeply interactive. Artificial neural networks (ANNs) have been a versatile tool for studying neural representations in the ventral visual stream, and knowledge from neuroscience has in turn inspired ANN models to improve task performance. However, how to merge these two directions into a unified model has been less studied. Here, we propose a hybrid model, called the deep autoencoder with neural response (DAE-NR), which incorporates information from the visual cortex into an ANN to achieve better image reconstruction and higher neural representation similarity between biological and artificial neurons. Specifically, the same visual stimuli (i.e., natural images) are presented to both the mouse brain and the DAE-NR. The DAE-NR jointly learns to map a specific layer of its encoder network to the biological neural responses in the ventral visual stream via a mapping function, and to reconstruct the visual input via its decoder. Our experiments demonstrate that, if and only if trained with joint learning, DAE-NRs can (i) improve the performance of image reconstruction and (ii) increase the representational similarity between biological and artificial neurons. The DAE-NR offers a new perspective on the integration of computer vision and visual neuroscience.
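A minimal sketch of the kind of joint objective the abstract describes: an image-reconstruction term plus a term aligning one encoder layer with recorded neural responses. The weighting scheme and the use of mean-squared error for the neural-alignment term are assumptions for illustration; the paper's exact losses and mapping function may differ:

```python
import numpy as np

def joint_loss(x: np.ndarray, x_hat: np.ndarray,
               z: np.ndarray, r: np.ndarray,
               alpha: float = 1.0, beta: float = 0.5) -> float:
    """Joint objective: reconstruction error between the input image x and
    its reconstruction x_hat, plus an alignment term between a mapped
    encoder-layer activation z and recorded neural responses r.
    alpha and beta are illustrative trade-off weights."""
    recon = np.mean((x - x_hat) ** 2)   # decoder reconstruction term
    neural = np.mean((z - r) ** 2)      # encoder-to-neuron alignment term
    return float(alpha * recon + beta * neural)
```

The abstract's "if and only if with joint learning" finding corresponds to optimizing both terms together rather than either one alone.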
Model-based single image dehazing algorithms restore images with sharp edges and rich details at the expense of low PSNR values. Data-driven ones restore images with high PSNR values but with low contrast and even some remaining haze. In this paper, a novel single image dehazing algorithm is introduced by fusing model-based and data-driven approaches. The transmission map and atmospheric light are initialized by a model-based method and refined by deep learning approaches, which together form a neural augmentation. The haze-free image is then restored using the transmission map and atmospheric light. Experimental results indicate that the proposed algorithm can remove haze well from both real-world and synthetic hazy images.
Sensory and emotional experiences such as pain and empathy are essential for mental and physical health. Cognitive neuroscience has been working to reveal the mechanisms underlying pain and empathy. Furthermore, as trending research areas, computational pain recognition and empathic artificial intelligence (AI) show progress and promise for healthcare and human-computer interaction. Although AI research has recently made it increasingly possible to create artificial systems with affective processing, most cognitive neuroscience and AI research does not jointly address the issues of empathy in AI and cognitive neuroscience. The main aim of this paper is to introduce key advances, cognitive challenges, and technical barriers in computational pain recognition and the implementation of artificial empathy. Our discussion covers the following topics: How can AI recognize pain from unimodal and multimodal information? Is it crucial for AI to be empathic? What are the benefits and challenges of empathic AI? Despite some consensus on the importance of empathic recognition and responses in AI, we also highlight future challenges for artificial empathy and possible paths forward from interdisciplinary perspectives. Furthermore, we discuss challenges for the responsible evaluation of cognitive methods and computational techniques and outline directions for future work toward affective assistants capable of empathy.
Machine learning plays an increasingly important role in medical image analysis, spawning new advances in the clinical application of neuroimaging. Previous reviews on machine learning and epilepsy have mainly focused on electrophysiological signals such as electroencephalography (EEG) and stereoelectroencephalography (SEEG), while neglecting the potential of neuroimaging in epilepsy research. Neuroimaging has important advantages in delineating the extent of the epileptogenic region, which is essential for presurgical evaluation and postsurgical assessment, whereas it is difficult for EEG to localize the precise epileptic lesion region in the brain. In this review, we emphasize the interaction between neuroimaging and machine learning in the context of epilepsy diagnosis and prognosis. We first outline epilepsy and the typical neuroimaging modalities used in epilepsy clinics: MRI, DWI, fMRI, and PET. We then elaborate two approaches to applying machine learning methods to neuroimaging data: (i) conventional machine learning approaches combining manual feature engineering with classifiers, and (ii) deep learning approaches such as convolutional neural networks and autoencoders. Subsequently, the applications of machine learning to epilepsy-related tasks such as segmentation, localization, and lateralization are examined in detail, along with tasks directly related to diagnosis and prognosis. Finally, we discuss current achievements, challenges, and potential future directions, with the hope of paving the way for the computer-aided diagnosis and prognosis of epilepsy.
The rise in data has led to the need for dimension reduction techniques, especially for non-scalar variables, including time series, natural language processing, and computer vision. In this paper, we specifically investigate dimension reduction for time series through functional data analysis. Current methods for dimension reduction in functional data, namely functional principal component analysis and functional autoencoders, are limited to linear mappings or to scalar representations of the time series, which is inefficient; in real data applications, the nature of the data is much more complex. We propose a non-linear function-on-function approach, consisting of a functional encoder and a functional decoder, that uses continuous hidden layers of continuous neurons to learn the structure inherent in functional data, addressing the aforementioned limitations of existing approaches. Our approach yields a low-dimensional latent representation by reducing both the number of functional features and the number of timepoints at which the functions are observed. The effectiveness of the proposed model is demonstrated through multiple simulations and real data examples.
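For context, the linear baseline the abstract contrasts against, functional principal component analysis, can be sketched in a few lines for curves sampled on a common time grid. This is a standard SVD formulation of FPCA, not the paper's non-linear method:

```python
import numpy as np

def fpca_scores(curves: np.ndarray, n_components: int):
    """Functional PCA via SVD of mean-centered curves on a shared grid.

    curves: (n_samples, n_timepoints). Returns the low-dimensional scores
    (n_samples, n_components) and the discretized eigenfunctions
    (n_components, n_timepoints)."""
    centered = curves - curves.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:n_components]      # discretized eigenfunctions
    scores = centered @ components.T    # linear latent representation
    return scores, components
```

Note the limitation the abstract points out: the mapping from curve to scores is linear, and the representation is a vector of scalars rather than a function.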
High feature dimensionality is a challenge in music emotion recognition (MER). There is no common consensus on the relation between audio features and emotion. A typical MER system uses all available features to recognize emotion; however, this is not an optimal solution, since the feature set contains irrelevant data that acts as noise. In this paper, we introduce a feature selection approach to eliminate redundant features for MER. We created a Selected Feature Set (SFS) based on a feature selection algorithm (FSA) and benchmarked it by training two models, Support Vector Regression (SVR) and Random Forest (RF), and comparing them against models trained on the Complete Feature Set (CFS). The results indicate that the performance of MER improved for both the RF and SVR models when using the SFS. We found that the FSA improves performance in all scenarios and has potential benefits for model efficiency and stability in the MER task.
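The abstract does not specify which FSA was used. As one simple stand-in, features can be ranked by absolute Pearson correlation with the emotion target and only the top-k kept; this is illustrative only, not the paper's algorithm:

```python
import numpy as np

def select_features(X: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Rank features by absolute Pearson correlation with the target y and
    return the indices of the top-k (a simple filter-style stand-in for
    the paper's unspecified FSA). X: (n_samples, n_features)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc)
    # Guard against constant features (zero variance => zero correlation).
    corr = np.abs(Xc.T @ yc) / np.where(denom == 0, 1.0, denom)
    return np.argsort(corr)[::-1][:k]
```

The selected column indices would then define the SFS on which SVR or RF models are trained, mirroring the SFS-vs-CFS comparison described above.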
Detecting abnormal crowd motion emerging from complex interactions of individuals is paramount to ensuring the safety of crowds. Crowd-level abnormal behaviors (CABs), e.g., counter flow and crowd turbulence, have proven to be crucial causes of many crowd disasters. In the past decade, video anomaly detection (VAD) techniques have achieved remarkable success in detecting individual-level abnormal behaviors (e.g., sudden running, fighting and stealing), but research on VAD for CABs is rather limited. Unlike individual-level anomalies, CABs usually do not exhibit a salient difference from normal behaviors when observed locally, and the scale of CABs can vary from one scenario to another. In this paper, we present a systematic study to tackle the important problem of VAD for CABs with a novel crowd motion learning framework, the multi-scale motion consistency network (MSMC-Net). MSMC-Net first captures the spatial and temporal crowd motion consistency information in a graph representation. Then, it simultaneously trains multiple feature graphs constructed at different scales to capture rich crowd patterns. An attention network is used to adaptively fuse the multi-scale features for better CAB detection. For the empirical study, we consider three large-scale crowd event datasets: UMN, Hajj and Love Parade. Experimental results show that MSMC-Net substantially improves on the state-of-the-art performance on all the datasets.
Achieving accurate and automated tumor segmentation plays an important role in both clinical practice and radiomics research. Segmentation in medicine is now often performed manually by experts, which is a laborious, expensive and error-prone task. Manual annotation relies heavily on the experience and knowledge of these experts, and there is considerable intra- and inter-observer variation. Therefore, it is of great significance to develop a method that can automatically segment tumor target regions. In this paper, we propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT), which combines the high sensitivity of PET and the precise anatomical information of CT. We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors, which uses multi-scale convolution operations to extract feature information and can highlight tumor-region location information while suppressing non-tumor-region location information. In addition, our network uses dual-channel inputs in the encoding stage and fuses them in the decoding stage, which can take advantage of the differences and complementarities between PET and CT. We validated the proposed ISA-Net method on two clinical datasets, a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset, and compared it with other attention methods for tumor segmentation. The DSC scores of 0.8378 on the STS dataset and 0.8076 on the HECKTOR dataset show that the ISA-Net method achieves better segmentation performance and better generalization. Conclusions: The method proposed in this paper is a multimodal medical image tumor segmentation approach that can effectively utilize the differences and complementarities between modalities, and it can also be applied to other multimodal or single-modal data with proper adjustment.
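The Dice similarity coefficient (DSC) reported above can be computed from binary segmentation masks as follows. This is the standard formulation; the smoothing constant eps is a common implementation detail and an assumption here:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray,
               eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2*|pred & target| / (|pred| + |target|).
    eps avoids division by zero when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum() + eps))
```

A DSC of 1.0 means perfect overlap with the expert annotation; the abstract's 0.8378 and 0.8076 sit on this 0-to-1 scale.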